Lidar (also "LIDAR", an acronym of "light detection and ranging" or "laser imaging, detection, and ranging") is a method for determining ranges by targeting an object or a surface with a laser and measuring the time for the reflected light to return to the receiver. Lidar may operate in a fixed direction (e.g., vertical) or it may scan multiple directions, in which case it is a combination of 3-D scanning and laser scanning.
Lidar has terrestrial, airborne, and mobile applications. It is commonly used to make high-resolution maps, with applications in surveying, geodesy, geomatics, archaeology, geography, geology, geomorphology, seismology, forestry, and atmospheric physics.
The evolution of quantum technology has given rise to quantum lidar, which demonstrates higher efficiency and sensitivity than conventional lidar systems.
Under the direction of Malcolm Stitch, the Hughes Aircraft Company introduced the first lidar-like system in 1961, shortly after the invention of the laser. Intended for satellite tracking, this system combined laser-focused imaging with the ability to calculate distances by measuring the time for a signal to return, using appropriate sensors and data acquisition electronics. It was originally called "colidar", an acronym for "coherent light detecting and ranging", derived from the term "radar", itself an acronym for "radio detection and ranging". All laser rangefinders, laser altimeters, and lidar units are derived from the early colidar systems.
The first practical terrestrial application of a colidar system was the "Colidar Mark II", a large rifle-like laser rangefinder produced in 1963, which had a range of 11 km and an accuracy of 4.5 m, to be used for military targeting. The first mention of lidar as a stand-alone word in 1963 suggests that it originated as a portmanteau of "light" and "radar": "Eventually the laser may provide an extremely sensitive detector of particular wavelengths from distant objects. Meanwhile, it is being used to study the Moon by 'lidar' (light radar) ..." (James Ring, "The Laser in Astronomy", New Scientist, pp. 672–673, 20 June 1963).
Lidar's first applications were in meteorology, for which the National Center for Atmospheric Research used it to measure clouds and pollution. The general public became aware of the accuracy and usefulness of lidar systems in 1971 during the Apollo 15 mission, when astronauts used a laser altimeter to map the surface of the Moon. Although the English language no longer treats "radar" as an acronym (the word is no longer capitalized), the word "lidar" was capitalized as "LIDAR" or "LiDAR" in some publications beginning in the 1980s. No consensus exists on capitalization: various publications refer to lidar as "LIDAR", "LiDAR", "LIDaR", or "Lidar". The USGS uses both "LIDAR" and "lidar", sometimes in the same document; the New York Times predominantly uses "lidar" for staff-written articles, although contributing news feeds such as Reuters may use "Lidar".
Wavelengths vary to suit the target: from about 10 micrometers (infrared) to approximately 250 nanometers (ultraviolet). Typically, light is reflected via backscattering, as opposed to the pure reflection one might find with a mirror. Different types of scattering are used for different lidar applications: most commonly Rayleigh scattering, Mie scattering, Raman scattering, and fluorescence. Suitable combinations of wavelengths can allow remote mapping of atmospheric contents by identifying wavelength-dependent changes in the intensity of the returned signal. The name "photonic radar" is sometimes used to mean visible-spectrum range finding like lidar, although photonic radar more strictly refers to radio-frequency range finding using photonics components.
A lidar determines the distance of an object or a surface with the formula:

d = c · t / 2
where c is the speed of light, d is the distance between the detector and the object or surface being detected, and t is the time spent for the laser light to travel to the object or surface being detected, then travel back to the detector.
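As a sketch, the relation can be computed directly (the 667 ns round-trip time below is a hypothetical example value):

```python
# Time-of-flight ranging: distance = c * t / 2, where t is the round-trip
# travel time of the pulse and the factor 2 accounts for the two-way path.
C = 299_792_458.0  # speed of light in vacuum (m/s)

def lidar_range(round_trip_time_s: float) -> float:
    """Distance (m) to the target from the measured round-trip time."""
    return C * round_trip_time_s / 2.0

# A pulse returning after about 667 ns corresponds to a target roughly 100 m away.
distance_m = lidar_range(667e-9)
```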
The two kinds of lidar detection schemes are "incoherent" or direct energy detection (which principally measures amplitude changes of the reflected light) and coherent detection (best for measuring Doppler effect shifts, or changes in the phase of the reflected light). Coherent systems generally use optical heterodyne detection. This is more sensitive than direct detection and allows them to operate at much lower power, but requires more complex transceivers.
Both types employ pulse models: either micropulse or high energy. Micropulse systems utilize intermittent bursts of energy. They developed as a result of ever-increasing computer power, combined with advances in laser technology. They use considerably less energy in the laser, typically on the order of one microjoule, and are often "eye-safe", meaning they can be used without safety precautions. High-power systems are common in atmospheric research, where they are widely used for measuring atmospheric parameters: the height, layering and densities of clouds, cloud particle properties (extinction coefficient, backscatter coefficient, depolarization), temperature, pressure, wind, humidity, and trace gas concentration (ozone, methane, nitrous oxide, etc.).
One common alternative, the 1,550 nm laser, is eye-safe at relatively high power levels, since this wavelength is strongly absorbed by the water in the eye and barely reaches the retina, though camera sensors can still be damaged. A trade-off, though, is that current detector technology at this wavelength is less advanced, so it is generally used at longer ranges with lower accuracies. 1,550 nm lasers are also used for military applications because they are not visible in night vision goggles, unlike the shorter 1,000 nm infrared laser.
Airborne topographic mapping lidars generally use 1,064 nm diode-pumped YAG lasers, while bathymetric (underwater depth research) systems generally use 532 nm frequency-doubled diode pumped YAG lasers because 532 nm penetrates water with much less attenuation than 1,064 nm. Laser settings include the laser repetition rate (which controls the data collection speed). Pulse length is generally an attribute of the laser cavity length, the number of passes required through the gain material (YAG, YLF, etc.), and Q-switch (pulsing) speed. Better target resolution is achieved with shorter pulses, provided the lidar receiver detectors and electronics have sufficient bandwidth.
A phased array can illuminate any direction by using a microscopic array of individual antennas. Controlling the timing (phase) of each antenna steers a cohesive signal in a specific direction. Phased arrays have been used in radar since the 1940s. On the order of a million optical antennas are needed to produce a radiation pattern of a certain size in a certain direction, and the phase of each individual antenna (emitter) must be precisely controlled. It is very difficult, if possible at all, to use the same technique in a lidar: the main problems are that all individual emitters must be coherent (technically, derived from the same "master" oscillator or laser source) and must have dimensions on the order of the wavelength of the emitted light (about 1 micron) to act as point sources, with their phases controlled to high accuracy.
MEMS (micro-electromechanical system) mirror systems are not entirely solid-state. However, their tiny form factor provides many of the same cost benefits. A single laser is directed onto a single mirror that can be reoriented to view any part of the target field. The mirror spins at a rapid rate. However, MEMS systems generally operate in a single plane (left to right); adding a second dimension generally requires a second mirror that moves up and down, or another laser hitting the same mirror from another angle. MEMS systems can be disrupted by shock and vibration and may require repeated calibration.
3-D imaging can be achieved using both scanning and non-scanning systems. "3-D gated viewing laser radar" is a non-scanning laser ranging system that applies a pulsed laser and a fast gated camera. Research has begun for virtual beam steering using Digital Light Processing (DLP) technology.
Imaging lidar can also be performed using arrays of high speed detectors and modulation sensitive detector arrays typically built on single chips using complementary metal–oxide–semiconductor (CMOS) and hybrid CMOS/charge-coupled device (CCD) fabrication techniques. In these devices each pixel performs some local processing such as demodulation or gating at high speed, downconverting the signals to video rate so that the array can be read like a camera. Using this technique many thousands of pixels / channels may be acquired simultaneously. High resolution 3-D lidar cameras use homodyne detection with an electronic CCD or CMOS shutter.
A coherent imaging lidar uses synthetic array heterodyne detection to enable a staring single element receiver to act as though it were an imaging array.
In 2014, Lincoln Laboratory announced a new imaging chip with more than 16,384 pixels, each able to image a single photon, enabling it to capture a wide area in a single image. An earlier generation of the technology, with one fourth as many pixels, was deployed by the U.S. military after the January 2010 Haiti earthquake. A single pass by a business jet over Port-au-Prince was able to capture instantaneous snapshots of sections of the city, displaying the precise height of rubble strewn in city streets. The new system is ten times better and could produce much larger maps more quickly. The chip uses indium gallium arsenide (InGaAs), which operates in the infrared spectrum at a relatively long wavelength that allows for higher power and longer ranges. In many applications, such as self-driving cars, the new system will lower costs by not requiring a mechanical component to aim the chip. InGaAs uses less hazardous wavelengths than conventional silicon detectors, which operate at visible wavelengths. New technologies for infrared photon-counting lidar are advancing rapidly, including arrays and cameras in a variety of semiconductor and superconducting platforms.
Laser projections of lidars can be manipulated using various methods and mechanisms to produce a scanning effect: the standard spindle-type, which spins to give a 360-degree view; solid-state lidar, which has a fixed field of view, but no moving parts, and can use either MEMS or optical phased arrays to steer the beams; and flash lidar, which spreads a flash of light over a large field of view before the signal bounces back to a detector.
Lidar applications can be divided into airborne and terrestrial types.
Airborne lidar (also airborne laser scanning) is when a laser scanner, attached to an aircraft during flight, creates a point cloud model of the landscape. This is currently the most detailed and accurate method of creating digital elevation models, replacing photogrammetry. One major advantage in comparison with photogrammetry is the ability to filter out reflections from vegetation from the point cloud model, creating a digital terrain model that represents ground surfaces, such as rivers, paths, and cultural heritage sites, which are concealed by trees. Within the category of airborne lidar, a distinction is sometimes made between high-altitude and low-altitude applications, but the main difference is a reduction in both accuracy and point density of data acquired at higher altitudes. Airborne lidar can also be used to create bathymetric models in shallow water.
The main products of airborne lidar include digital elevation models (DEM) and digital surface models (DSM). The ground and non-ground points are vectors of discrete points, while the DEM and DSM are raster grids interpolated from those discrete points. The process also involves capturing digital aerial photographs. Airborne lidar is used, for example, to interpret deep-seated landslides under vegetation cover by revealing scarps, tension cracks, or tipped trees. Airborne lidar digital elevation models can see through the forest canopy and support detailed measurements of scarps, erosion, and the tilting of electric poles.
Airborne lidar data is processed using a toolbox called Toolbox for Lidar Data Filtering and Forest Studies (TIFFS) for lidar data filtering and terrain study. The data is interpolated to digital terrain models using the software. The laser is directed at the region to be mapped, and each point's height above the ground is calculated by subtracting the original z-coordinate from the corresponding digital terrain model elevation. Based on this height above the ground, the non-vegetation data is obtained, which may include objects such as buildings, electric power lines, flying birds, and insects. The rest of the points are treated as vegetation and used for modeling and mapping. Within each of these plots, lidar metrics are calculated as statistics such as mean, standard deviation, skewness, percentiles, and quadratic mean.
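As an illustrative sketch of the normalization and metrics steps (the synthetic points and the 0.5 m vegetation cutoff below are hypothetical, not taken from any particular TIFFS workflow):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: raw point z-coordinates and the interpolated
# digital-terrain-model elevation at each point's (x, y) location.
point_z = rng.uniform(100.0, 130.0, size=1000)   # raw z (m)
terrain_z = np.full(1000, 100.0)                 # DTM elevation (m)

# Height above ground = original z minus the DTM elevation.
height = point_z - terrain_z

# Returns above a height threshold are kept as vegetation (a hypothetical
# cutoff; real workflows also separate buildings, power lines, etc.).
veg = height[height > 0.5]

# Per-plot "lidar metrics": summary statistics of the vegetation heights.
metrics = {
    "mean": float(veg.mean()),
    "std": float(veg.std()),
    "skewness": float(((veg - veg.mean()) ** 3).mean() / veg.std() ** 3),
    "p95": float(np.percentile(veg, 95)),
    "quadratic_mean": float(np.sqrt(np.mean(veg ** 2))),
}
```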
Multiple commercial lidar systems for unmanned aerial vehicles are currently on the market. These platforms can systematically scan large areas, or provide a cheaper alternative to manned aircraft for smaller scanning operations.
The airborne lidar bathymetric technological system involves the measurement of time of flight of a signal from a source to its return to the sensor. The data acquisition technique involves a sea floor mapping component and a ground truth component that includes video transects and sampling. It works using a green spectrum (532 nm) laser beam. Two beams are projected onto a fast rotating mirror, which creates an array of points. One of the beams penetrates the water and also detects the bottom surface of the water under favorable conditions.
Water depth measurable by lidar depends on the clarity of the water and the absorption of the wavelength used. Water is most transparent to green and blue light, so these penetrate deepest in clean water. Blue-green light of 532 nm, produced by frequency-doubled solid-state IR laser output, is the standard for airborne bathymetry. This light can penetrate water, but pulse strength attenuates exponentially with the distance traveled through the water, so the achievable depth and vertical accuracy depend on these conditions. Surface reflection makes very shallow water difficult to resolve, and absorption limits the maximum depth. Turbidity causes scattering and plays a significant role in determining the maximum depth that can be resolved in most situations, and dissolved pigments can increase absorption depending on wavelength. Other reports indicate that water penetration tends to be between two and three times Secchi depth. Bathymetric lidar is most useful in shallow coastal mapping.
In fairly clear coastal seawater, lidar penetrates considerably deeper than in turbid water. An average value found by Saputra et al., 2021, is that green laser light penetrates water to about one and a half to two times the Secchi depth in Indonesian waters. Water temperature and salinity affect the refractive index, which has a small effect on the depth calculation.
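The exponential attenuation and the Secchi-depth rule of thumb can be sketched as follows (the attenuation coefficient and the 1.75 midpoint factor are hypothetical illustrations, not measured values):

```python
import math

def return_fraction(depth_m: float, k_per_m: float) -> float:
    """Fraction of pulse energy surviving the two-way path through the
    water column, following Beer-Lambert exponential attenuation."""
    return math.exp(-2.0 * k_per_m * depth_m)

def penetration_from_secchi(secchi_m: float, factor: float = 1.75) -> float:
    """Rough penetration estimate: ~1.5-2x the Secchi depth (midpoint used)."""
    return factor * secchi_m
```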
The data obtained shows the full extent of the land surface exposed above the sea floor. This technique is extremely useful, as it plays an important role in major sea floor mapping programs. The mapping yields onshore topography as well as underwater elevations. Sea floor reflectance imaging is another product of this system, which can benefit mapping of underwater habitats. This technique has been used for three-dimensional image mapping of California's waters using a hydrographic lidar.
Airborne lidar systems were traditionally able to acquire only a few peak returns, while more recent systems acquire and digitize the entire reflected signal. Scientists analyse the waveform signal to extract peak returns using Gaussian decomposition. Zhuang et al., 2017 used this approach for estimating aboveground biomass. Handling the huge amounts of full-waveform data is difficult; Gaussian decomposition of the waveforms is therefore effective, since it reduces the data and is supported by existing workflows for interpreting 3-D point clouds. Recent studies have investigated voxel-based approaches, in which the intensities of the waveform samples are inserted into a voxelised space (a 3-D grayscale image), building up a 3-D representation of the scanned area.
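A minimal sketch of Gaussian decomposition, assuming SciPy is available and using a synthetic two-pulse waveform (all pulse parameters below are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, mu1, s1, a2, mu2, s2):
    """Model: a full waveform as the sum of two Gaussian pulses."""
    g = lambda a, mu, s: a * np.exp(-((t - mu) ** 2) / (2.0 * s ** 2))
    return g(a1, mu1, s1) + g(a2, mu2, s2)

# Synthetic digitized waveform: two return pulses plus a little noise.
t = np.linspace(0.0, 100.0, 400)                       # sample times (ns)
clean = two_gaussians(t, 1.0, 30.0, 3.0, 0.6, 62.0, 4.0)
waveform = clean + np.random.default_rng(1).normal(0.0, 0.01, t.size)

# Fit the model; initial guesses near the visible peaks suffice.
p0 = [0.8, 28.0, 2.0, 0.5, 60.0, 3.0]
params, _ = curve_fit(two_gaussians, t, waveform, p0=p0)
peak_times = sorted([params[1], params[4]])            # recovered pulse centers
```

Each recovered peak time corresponds to a return, so the decomposition reduces the full waveform to a small set of range/amplitude pairs.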
Terrestrial applications of lidar (also terrestrial laser scanning) happen on the Earth's surface and can be either stationary or mobile. Stationary terrestrial scanning is most common as a survey method, for example in conventional topography, monitoring, cultural heritage documentation, and forensics. The point cloud acquired from these types of scanners can be matched with digital images taken of the scanned area from the scanner's location to create realistic-looking 3-D models in a relatively short time when compared to other technologies. Each point in the point cloud is given the colour of the pixel from the image taken at the same location and direction as the laser beam that created the point.
Terrestrial lidar mapping involves a process of occupancy grid map generation. The area is divided into an array of grid cells, each of which stores the height values of the lidar data that fall into it. A binary map is then created by applying a threshold to the cell values for further processing. The next step is to process the radial distance and z-coordinates from each scan to identify which 3-D points correspond to each specified grid cell, leading to the process of data formation.
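The grid-and-threshold steps above can be sketched as follows (the cell size, threshold, and sample points are hypothetical):

```python
import numpy as np

# A few (x, y, z) lidar returns in metres (synthetic example data).
points = np.array([
    [0.2, 0.3, 0.1], [0.4, 0.1, 0.2],   # low ground returns
    [1.5, 0.5, 2.1], [1.6, 0.4, 2.0],   # tall object
    [0.5, 1.7, 0.3],
])

cell = 1.0                  # grid resolution (m)
nx = ny = 2
heights = np.zeros((nx, ny))

# Store the maximum height seen in each grid cell.
for x, y, z in points:
    i, j = int(x // cell), int(y // cell)
    heights[i, j] = max(heights[i, j], z)

# Threshold the cell heights to get a binary occupancy map.
binary_map = heights > 1.0
```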
Mobile lidar (also mobile laser scanning) is when two or more scanners are attached to a moving vehicle to collect data along a path. These scanners are almost always paired with other kinds of equipment, including GNSS receivers and IMUs. One example application is surveying streets, where power lines, exact bridge heights, bordering trees, etc. all need to be taken into account. Instead of collecting each of these measurements individually in the field with a tachymeter, a 3-D model from a point cloud can be created where all of the measurements needed can be made, depending on the quality of the data collected. This eliminates the problem of forgetting to take a measurement, so long as the model is available, reliable and has an appropriate level of accuracy.
Companies are working to cut the cost of lidar sensors, currently anywhere from about US$1,200 to more than $12,000. Lower prices will make lidar more attractive for new markets.
Lidar can help determine where to apply costly fertilizer. It can create a topographical map of the fields and reveal slopes and sun exposure of the farmland. Researchers at the Agricultural Research Service used this topographical data with the farmland yield results from previous years, to categorize land into zones of high, medium, or low yield. This indicates where to apply fertilizer to maximize yield.
Lidar is now used to monitor insects in the field. The use of lidar can detect the movement and behavior of individual flying insects, with identification down to sex and species. In 2017 a patent application was published on this technology in the United States, Europe, and China.
Another application is crop mapping in orchards and vineyards, to detect foliage growth and the need for pruning or other maintenance, detect variations in fruit production, or count plants.
Lidar is useful in GNSS-denied situations, such as nut and fruit orchards, where foliage causes interference for agriculture equipment that would otherwise utilize a precise GNSS fix. Lidar sensors can detect and track the relative position of rows, plants, and other markers so that farming equipment can continue operating until a GNSS fix is reestablished.
Controlling weeds requires identifying plant species. This can be done by using 3-D lidar and machine learning.
Lidar can also help to create high-resolution digital elevation models (DEMs) of archaeological sites that can reveal micro-topography that is otherwise hidden by vegetation. The intensity of the returned lidar signal can be used to detect features buried under flat vegetated surfaces such as fields, especially when mapping using the infrared spectrum. The presence of these features affects plant growth and thus the amount of infrared light reflected back. For example, at Fort Beauséjour – Fort Cumberland National Historic Site, Canada, lidar discovered archaeological features related to the siege of the Fort in 1755. Features that could not be distinguished on the ground or through aerial photography were identified by overlaying hill shades of the DEM created with artificial illumination from various angles. Another example is work at Caracol by Arlen Chase and his wife Diane Zaino Chase. In 2012, lidar was used to search for the legendary city of La Ciudad Blanca or "City of the Monkey God" in the La Mosquitia region of the Honduran jungle. During a seven-day mapping period, evidence was found of man-made structures. In June 2013, the rediscovery of the city of Mahendraparvata was announced. In southern New England, lidar was used to reveal stone walls, building foundations, abandoned roads, and other landscape features obscured in aerial photography by the region's dense forest canopy. In Cambodia, lidar data was used by Damian Evans and Roland Fletcher to reveal anthropogenic changes to Angkor landscape.
In 2012, lidar revealed that the Purépecha settlement of Angamuco in Michoacán, Mexico had about as many buildings as today's Manhattan; while in 2016, its use in mapping ancient Maya causeways in northern Guatemala, revealed 17 elevated roads linking the ancient city of El Mirador to other sites. In 2018, archaeologists using lidar discovered more than 60,000 man-made structures in the Maya Biosphere Reserve, a "major breakthrough" that showed the Maya civilization was much larger than previously thought. In 2024, archaeologists using lidar discovered the Upano Valley sites.
The very first generations of automotive adaptive cruise control systems used only lidar sensors.
In transportation systems, to ensure vehicle and passenger safety and to develop electronic systems that deliver driver assistance, understanding the vehicle and its surrounding environment is essential. Lidar systems play an important role in the safety of transportation systems. Many electronic systems which enhance driver assistance and vehicle safety such as Adaptive Cruise Control (ACC), Emergency Brake Assist, and Anti-lock Braking System (ABS) depend on the detection of a vehicle's environment to act autonomously or semi-autonomously. Lidar mapping and estimation achieve this.
Current lidar systems use rotating hexagonal mirrors which split the laser beam. The upper three beams are used to detect vehicles and obstacles ahead, and the lower beams are used to detect lane markings and road features.
Obstacle detection and road environment recognition using lidar, proposed by Kun Zhou et al., not only focuses on object detection and tracking but also recognizes lane markings and road features. As mentioned earlier, such lidar systems use rotating hexagonal mirrors that split the laser beam into six beams. The upper three layers are used to detect forward objects such as vehicles and roadside objects. The sensor is made of weather-resistant material. The data detected by lidar are clustered into several segments and tracked by a Kalman filter. Clustering here is done based on the characteristics of each segment according to an object model, which distinguishes different objects such as vehicles and signboards; these characteristics include the dimensions of the object. The reflectors on the rear edges of vehicles are used to differentiate vehicles from other objects. Object tracking is done using a two-stage Kalman filter, considering the stability of tracking and the accelerated motion of objects. Lidar reflective intensity data is also used for curb detection by making use of robust regression to deal with occlusions. Road marking is detected using a modified Otsu method by distinguishing rough and shiny surfaces.
Roadside reflectors that indicate the lane border are sometimes hidden for various reasons; therefore, other information is needed to recognize the road border. The lidar used in this method can measure the reflectivity of objects, so with this data the road border can also be recognized. The use of a sensor with a weather-robust head helps detect objects even in bad weather conditions. A canopy height model computed before and after a flood is a good example: lidar can capture highly detailed canopy height data as well as the road border. Lidar measurements help identify the spatial structure of an obstacle, which helps distinguish objects by size and estimate the impact of driving over it.
Lidar is also used in structural geology and geophysics as a combination between airborne lidar and GNSS for the detection and study of faults and for measuring tectonic uplift. The output of the two technologies can produce extremely accurate elevation models for terrain, models that can even measure ground elevation through trees. This combination was used most famously to find the location of the Seattle Fault in Washington, United States. The combination has also been used to measure uplift at Mount St. Helens by comparing data from before and after the 2004 uplift ("Mount Saint Helens LIDAR Data", Washington State Geospatial Data Archive, September 13, 2006; retrieved 8 August 2007). Airborne lidar systems monitor glaciers and have the ability to detect subtle amounts of growth or decline. A satellite-based system, NASA's ICESat, includes a lidar sub-system for this purpose. The NASA Airborne Topographic Mapper ("Airborne Topographic Mapper", NASA.gov; retrieved 8 August 2007) is also used extensively to monitor and perform coastal change analysis. The combination is also used by soil scientists when creating soil surveys: the detailed terrain modeling allows them to see slope changes and landform breaks which indicate patterns in soil spatial relationships.
Atmospheric lidar remote sensing works in two ways:
Backscatter from the atmosphere directly gives a measure of clouds and aerosols. Other measurements derived from backscatter, such as winds or cirrus ice crystals, require careful selection of the wavelength and/or polarization detected. Doppler lidar and Rayleigh Doppler lidar are used to measure temperature and wind speed along the beam by measuring the frequency of the backscattered light. The Doppler broadening of gases in motion allows the determination of properties via the resulting frequency shift. Scanning lidars, such as NASA's conical-scanning HARLIE, have been used to measure atmospheric wind velocity (Thomas D. Wilkerson, Geary K. Schwemmer, and Bruce M. Gentry, "LIDAR Profiling of Aerosols, Clouds, and Winds by Doppler and Non-Doppler Methods", NASA International H2O Project, 2002). The ESA wind mission ADM-Aeolus will be equipped with a Doppler lidar system in order to provide global measurements of vertical wind profiles ("Earth Explorers: ADM-Aeolus", ESA.org, European Space Agency, 6 June 2007; retrieved 8 August 2007). A Doppler lidar system was used in the 2008 Summer Olympics to measure wind fields during the yacht competition ("Doppler lidar gives Olympic sailors the edge", Optics.org, 3 July 2008; retrieved 8 July 2008).
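The Doppler relation used by such systems can be sketched as follows (the wavelength and frequency-shift values are hypothetical examples):

```python
# Coherent Doppler lidar: the line-of-sight velocity follows from the
# measured frequency shift of the backscattered light, v = lambda * df / 2
# (the factor 2 accounts for the two-way Doppler shift).

WAVELENGTH_M = 1.55e-6  # a wavelength commonly used by coherent Doppler lidars

def los_velocity(freq_shift_hz: float) -> float:
    """Line-of-sight wind speed (m/s) from the measured Doppler shift (Hz)."""
    return WAVELENGTH_M * freq_shift_hz / 2.0

# A 10 MHz shift at 1.55 um corresponds to 7.75 m/s along the beam.
v = los_velocity(10e6)
```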
Doppler lidar systems are also now beginning to be successfully applied in the renewable energy sector to acquire wind speed, turbulence, wind veer, and wind shear data. Both pulsed and continuous wave systems are being used. Pulsed systems use signal timing to obtain vertical distance resolution, whereas continuous wave systems rely on detector focusing.
The term eolics has been proposed to describe the collaborative and interdisciplinary study of wind using computational fluid mechanics simulations and Doppler lidar measurements (Clive, P. J. M., "The emergence of eolics", TEDx University of Strathclyde, 2014; retrieved 9 May 2014).
The ground reflection of an airborne lidar gives a measure of surface reflectivity (assuming the atmospheric transmittance is well known) at the lidar wavelength; however, the ground reflection is typically used for making absorption measurements of the atmosphere. "Differential absorption lidar" (DIAL) measurements utilize two or more closely spaced (less than 1 nm apart) wavelengths to factor out surface reflectivity as well as other transmission losses, since these factors are relatively insensitive to wavelength. When tuned to the appropriate absorption lines of a particular gas, DIAL measurements can be used to determine the concentration (mixing ratio) of that gas in the atmosphere. This is referred to as an Integrated Path Differential Absorption (IPDA) approach, since it measures the integrated absorption along the entire lidar path. IPDA lidars can be either pulsed or CW and typically use two or more wavelengths. IPDA lidars have been used for remote sensing of carbon dioxide and methane.
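The two-wavelength DIAL retrieval can be sketched as follows (the cross-sections, path length, and powers are hypothetical numbers chosen only to exercise the formula):

```python
import math

def dial_concentration(p_on, p_off, sigma_on, sigma_off, path_m):
    """Mean number density (molecules/m^3) along the path from the ratio of
    on-line and off-line returned powers:
        N = ln(P_off / P_on) / (2 * L * (sigma_on - sigma_off))
    The factor 2 accounts for the two-way path."""
    return math.log(p_off / p_on) / (2.0 * path_m * (sigma_on - sigma_off))

# Round-trip check: synthesize the on-line power for a known density...
n_true = 2.5e25                 # molecules/m^3
d_sigma = 1e-28                 # m^2, differential absorption cross-section
L = 1000.0                      # m, path length
p_off = 1.0
p_on = p_off * math.exp(-2.0 * L * d_sigma * n_true)

# ...and recover it from the power ratio.
n_est = dial_concentration(p_on, p_off, d_sigma, 0.0, L)
```

Because surface reflectivity and most transmission losses cancel in the ratio, only the differential absorption term survives, which is the essence of the DIAL technique described above.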
Synthetic array lidar allows imaging lidar without the need for an array detector. It can be used for imaging Doppler velocimetry, ultra-fast frame rate imaging (millions of frames per second), and speckle pattern reduction in coherent lidar. An extensive lidar bibliography for atmospheric and hydrospheric applications is given by Grant (Grant, W. B., "Lidar for atmospheric and hydrospheric studies", in Tunable Laser Applications, 1st ed., Duarte, F. J., Ed., Marcel Dekker, New York, 1995, Chapter 7).
As part of the Cabinet Office's Bridging Research and Development and Society 5.0 (BRIDGE) Program, a multi-institutional research consortium led by Kyushu University, and including EKO Instruments Co., Ltd., Kyoto University, and several national universities and institutes, was selected by the Kyushu Regional Development Bureau of the Ministry of Land, Infrastructure, Transport and Tourism (MLIT) to conduct advanced research on meteorological sensing and flood risk modelling.
In May 2025, EKO Instruments began a field study on Goto Fukue Island, Nagasaki Prefecture, using a Micropulse DIAL lidar system provided by the National Center for Atmospheric Research (NSF NCAR). The study compares DIAL performance with existing Raman lidar systems.
In addition, EKO Instruments Co., Ltd. signed a technology license agreement in February 2025 with Montana State University, NSF NCAR, and NASA, covering key DIAL-related patents. The company is working toward commercializing a compact and efficient lidar system by 2026.
As of June 2025, much of the work has focused on Japan's flood-prone regions such as Kyushu, but EKO Instruments Co., Ltd. has stated its intent to contribute to global flood resilience, targeting other vulnerable regions, including the United States and Europe, where extreme weather and flooding events are increasingly common. The goal is to expand the application of advanced atmospheric lidar and AI forecasting systems to support international disaster preparedness and mitigation efforts.
The initiative has received national attention, with coverage by NHK Fukuoka and the Nikkan Kogyo Shimbun, and is expected to play a key role in developing next-generation weather forecasting infrastructure.
A NATO report (RTO-TR-SET-098) evaluated the potential technologies for stand-off detection and discrimination of biological warfare agents. The technologies evaluated were Long-Wave Infrared (LWIR), Differential Scattering (DISC), and Ultraviolet Laser-Induced Fluorescence (UV-LIF). The report concluded: "Based upon the results of the lidar systems tested and discussed above, the Task Group recommends that the best option for the near-term (2008–2010) application of stand-off detection systems is UV-LIF"; however, in the long term, other techniques such as stand-off Raman spectroscopy may prove to be useful for identification of biological warfare agents.
Short-range compact spectrometric lidar based on Laser-Induced Fluorescence (LIF) would address the presence of bio-threats in aerosol form over critical indoor, semi-enclosed and outdoor venues such as stadiums, subways, and airports. This near real-time capability would enable rapid detection of a bioaerosol release and allow for timely implementation of measures to protect occupants and minimize the extent of contamination.
The Long-Range Biological Standoff Detection System (LR-BSDS) was developed for the U.S. Army to provide the earliest possible standoff warning of a biological attack. It is an airborne system carried by helicopter to detect synthetic aerosol clouds containing biological and chemical agents at long range. The LR-BSDS, with a detection range of 30 km or more, was fielded in June 1997.

Five lidar units produced by the German company Sick AG were used for short-range detection on Stanley, the driverless car that won the 2005 DARPA Grand Challenge.
A robotic Boeing AH-6 performed a fully autonomous flight in June 2010, including avoiding obstacles using lidar. Spice, Byron, "Researchers Help Develop Full-Size Autonomous Helicopter", Carnegie Mellon, 6 July 2010. Retrieved 19 July 2010. Koski, Olivia, "In a First, Full-Sized Robo-Copter Flies With No Human Help", Wired, 14 July 2010. Retrieved 19 July 2010.
Lidar sensors may also be used for obstacle detection and avoidance by robotic mining vehicles, such as the Komatsu Autonomous Haulage System (AHS) used in Rio Tinto's Mine of the Future.
In September 2008, NASA's Phoenix lander used lidar to detect snow falling in the atmosphere of Mars. NASA, "NASA Mars Lander Sees Falling Snow, Soil Data Suggest Liquid Past", NASA.gov (29 September 2008). Retrieved 9 November 2008.
In atmospheric physics, lidar is used as a remote detection instrument to measure densities of certain constituents of the middle and upper atmosphere, such as potassium, sodium, or molecular nitrogen and oxygen. These measurements can be used to calculate temperatures. Lidar can also be used to measure wind speed and to provide information about vertical distribution of the aerosol particles.
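One of these atmospheric measurements, wind speed, is made with Doppler lidar: the motion of aerosols along the beam shifts the frequency of the backscattered light, and the line-of-sight speed follows from v = λΔf/2. A minimal sketch, in which the 1550 nm wavelength and the frequency shift are illustrative assumptions rather than values from any particular instrument:

```python
# Hedged sketch: line-of-sight wind speed from the Doppler shift of
# backscattered lidar light, v = (wavelength * delta_f) / 2.
# The wavelength and shift below are illustrative, not instrument-specific.

WAVELENGTH_M = 1.55e-6  # assumed eye-safe operating wavelength (1550 nm)

def radial_wind_speed(doppler_shift_hz: float,
                      wavelength_m: float = WAVELENGTH_M) -> float:
    """Line-of-sight wind speed in m/s from the measured Doppler shift in Hz."""
    return wavelength_m * doppler_shift_hz / 2.0

# A shift of about 12.9 MHz at 1550 nm corresponds to roughly 10 m/s.
print(round(radial_wind_speed(12.9e6), 2))
```

Only the component of the wind along the beam is measured; scanning systems combine several beam directions to recover the full wind vector.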
At the JET nuclear fusion research facility in the UK near Abingdon, Oxfordshire, lidar Thomson scattering is used to determine electron density and temperature profiles of the plasma. C.W. Gowers, "Focus On: Lidar-Thomson Scattering Diagnostic on JET", JET.EFDA.org (undated). Retrieved 8 August 2007.
Some of these properties have been used to assess the geomechanical quality of the rock mass through the rock mass rating (RMR) index. Moreover, as the orientations of discontinuities can be extracted using the existing methodologies, it is possible to assess the geomechanical quality of a rock slope through the slope mass rating (SMR) index. In addition, the comparison of different 3-D point clouds of a slope acquired at different times allows researchers to study the changes produced on the scene during this time interval as a result of rockfalls or other landslide processes.
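The multi-temporal comparison described above can be reduced to a nearest-neighbour distance test: for each point in the later scan, find the closest point in the earlier scan, and flag points whose distance exceeds a threshold as probable change. A minimal brute-force sketch (real workflows use registered clouds and spatial indices such as k-d trees; the points and threshold here are made up for illustration):

```python
# Hedged sketch of change detection between two 3-D point clouds of a rock
# slope acquired at different times. Points in the later scan that are far
# from every point in the earlier scan suggest material gained or lost
# (e.g. a rockfall). Brute force is O(n*m); illustrative only.
import math

Point = tuple[float, float, float]

def nearest_distance(p: Point, cloud: list[Point]) -> float:
    """Distance from p to its nearest neighbour in cloud."""
    return min(math.dist(p, q) for q in cloud)

def changed_points(before: list[Point], after: list[Point],
                   threshold: float) -> list[Point]:
    """Points in `after` farther than `threshold` from any point in `before`."""
    return [p for p in after if nearest_distance(p, before) > threshold]

before = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
after = [(0, 0, 0), (1, 0, 0), (2, 0, -0.5)]  # one point dropped 0.5 m
print(changed_points(before, after, threshold=0.1))  # → [(2, 0, -0.5)]
```

In practice the two scans must first be co-registered in a common coordinate frame, otherwise apparent change is dominated by alignment error.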
Laser altimetry is used to make digital elevation maps of planets, including the Mars Orbital Laser Altimeter (MOLA) mapping of Mars,Bruce Banerdt, Orbital Laser Altimeter, The Martian Chronicle, Volume 1, No. 3, NASA.gov. Retrieved 11 March 2019. the Lunar Orbital Laser Altimeter (LOLA)NASA, LOLA. Retrieved 11 March 2019. and Lunar Altimeter (LALT) mapping of the Moon, and the Mercury Laser Altimeter (MLA) mapping of Mercury.John F. Cavanaugh, et al., " The Mercury Laser Altimeter Instrument for the MESSENGER Mission", Space Sci Rev, , 24 August 2007. Retrieved 11 March 2019. It is also used to help navigate the helicopter Ingenuity in its record-setting flights over the terrain of Mars.
Lidar is also used in hydrographic surveying. Depending upon the clarity of the water, lidar can measure depths with vertical and horizontal accuracies that depend on the system and the survey conditions.
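Bathymetric lidar works by timing two returns from each pulse: one from the water surface and one from the bottom. Because light travels more slowly in water (speed c/n), the depth follows from the time gap between the returns. A rough sketch, with illustrative values rather than the specification of any real instrument:

```python
# Hedged sketch: water depth from the gap between the surface return and
# the bottom return of a bathymetric lidar pulse. The factor of 2 accounts
# for the round trip; dividing by n corrects for the slower in-water speed.
# Values are illustrative, not from a specific instrument.

C = 299_792_458.0  # speed of light in vacuum, m/s
N_WATER = 1.33     # approximate refractive index of water

def water_depth(t_surface_s: float, t_bottom_s: float,
                n: float = N_WATER) -> float:
    """Depth in metres from the surface and bottom return times (seconds)."""
    return C * (t_bottom_s - t_surface_s) / (2.0 * n)

# A gap of about 89 ns between returns corresponds to roughly 10 m of water.
print(round(water_depth(0.0, 88.7e-9), 2))
```

Real systems pair a water-penetrating green laser with an infrared laser that reflects off the surface, which is why turbidity limits the achievable depth.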
The 2017 exploration game Scanner Sombre, by Introversion Software, uses lidar as a fundamental game mechanic.
In Build the Earth, lidar is used to create accurate renders of terrain in Minecraft to account for any errors (mainly regarding elevation) in the default generation. The process of rendering terrain into Build the Earth is limited by the amount of data available in a region, as well as the speed at which the files can be converted into block data.
In 2020, Apple introduced the fourth generation of iPad Pro with a lidar sensor integrated into the rear camera module, developed especially for augmented reality (AR) experiences. The feature was later included in the iPhone 12 Pro lineup and subsequent Pro models. On Apple devices, lidar enables Night mode portrait photos, speeds up autofocus, and improves accuracy in the Measure app.
In 2022, Wheel of Fortune started using lidar technology to track when Vanna White moves her hand over the puzzle board to reveal letters. The first episode to use this technology was the season 40 premiere.
As with all forms of lidar, the onboard source of illumination makes flash lidar an active sensor. The signal that is returned is processed by embedded algorithms to produce a nearly instantaneous 3-D rendering of objects and terrain features within the field of view of the sensor. The laser pulse repetition frequency is sufficient for generating 3-D videos with high resolution and accuracy. The high frame rate of the sensor makes it a useful tool for a variety of applications that benefit from real-time visualization, such as highly precise remote landing operations. By immediately returning a 3-D elevation mesh of target landscapes, a flash sensor can be used to identify optimal landing zones in autonomous spacecraft landing scenarios.Dietrich, Ann Brown, "Supporting Autonomous Navigation with Flash Lidar Images in Proximity to Small Celestial Bodies" (2017). CU Boulder Aerospace Engineering Sciences Graduate Theses & Dissertations. 178.
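Since a flash lidar illuminates the whole scene with a single pulse and times the return at every detector pixel, each frame is effectively a range image, with each pixel's range given by d = c·t/2. A minimal sketch of that per-pixel conversion, using a made-up 2×2 "frame" of round-trip times purely for illustration:

```python
# Hedged sketch: convert a grid of per-pixel round-trip times from a flash
# lidar detector into a per-pixel range image using d = c * t / 2.
# The 2x2 frame below is invented for illustration.

C = 299_792_458.0  # speed of light, m/s

def range_image(times_s: list[list[float]]) -> list[list[float]]:
    """Per-pixel range in metres from per-pixel round-trip times in seconds."""
    return [[C * t / 2.0 for t in row] for row in times_s]

frame = [[66.7e-9, 66.7e-9],
         [133.4e-9, 200.1e-9]]  # round-trip times for a 2x2 detector array
for row in range_image(frame):
    print([round(r, 1) for r in row])
```

Producing such a frame at the laser's pulse repetition rate is what yields the 3-D video capability described above.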
Seeing at a distance requires a powerful burst of light, but power is limited to levels that do not damage human retinas, and wavelengths must not affect human eyes. However, low-cost silicon imagers cannot read light in the eye-safe spectrum; instead, gallium arsenide imagers are required, which can boost costs to $200,000. Gallium arsenide is the same compound used to produce high-cost, high-efficiency solar panels, usually used in space applications.